Task-Independent Knowledge Makes for Transferable Representations for Generalized Zero-Shot Learning

Authors

Abstract

Generalized Zero-Shot Learning (GZSL) targets recognizing new categories by learning transferable image representations. Existing methods find that, by aligning image representations with their corresponding semantic labels, the semantic-aligned representations can be transferred to unseen categories. However, supervised by only seen category labels, the learned semantic knowledge is highly task-specific, which makes image representations biased towards seen categories. In this paper, we propose a novel Dual-Contrastive Embedding Network (DCEN) that simultaneously learns task-specific and task-independent knowledge via semantic alignment and instance discrimination. First, DCEN leverages task labels to cluster representations of the same semantic category by cross-modal contrastive learning, exploring semantic-visual complementarity. Besides this task-specific knowledge, DCEN then introduces task-independent knowledge by attracting representations of different views of the same image and repelling those of different images. Compared to high-level semantic supervision, this instance-discrimination supervision encourages the model to capture low-level visual knowledge that is less biased toward seen categories, which alleviates the representation bias. Consequently, task-specific and task-independent knowledge jointly make for transferable representations in DCEN, which obtains an averaged 4.1% improvement on four public benchmarks.
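As a rough illustration of the two objectives described in the abstract, the sketch below combines a cross-modal contrastive loss (semantic alignment against class embeddings) with an instance-discrimination loss over two augmented views of each image. The InfoNCE-style loss forms, the temperature, the weighting coefficient, and all function names are assumptions made for illustration, not the authors' exact formulation.

```python
# Illustrative sketch of the two contrastive objectives described in the
# abstract (semantic alignment + instance discrimination). Loss forms,
# temperature, and weighting are assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F


def semantic_alignment_loss(img_emb, class_emb, labels, tau=0.1):
    """Cross-modal contrastive loss: pull each image embedding toward the
    semantic embedding of its own class, push it away from other classes.

    img_emb:   (B, D) image embeddings
    class_emb: (C, D) semantic (attribute) embeddings, one per seen class
    labels:    (B,)   class index of each image
    """
    img_emb = F.normalize(img_emb, dim=1)
    class_emb = F.normalize(class_emb, dim=1)
    logits = img_emb @ class_emb.t() / tau          # (B, C) similarities
    return F.cross_entropy(logits, labels)


def instance_discrimination_loss(view1, view2, tau=0.1):
    """Task-independent contrastive loss: two augmented views of the same
    image are positives, all other images in the batch are negatives."""
    z = F.normalize(torch.cat([view1, view2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))               # ignore self-similarity
    B = view1.size(0)
    # The positive for sample i is its other view at index i+B (or i-B).
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets.to(sim.device))


def dual_contrastive_loss(img_emb, class_emb, labels, view1, view2, lam=1.0):
    # Joint objective; the weighting coefficient lam is a hypothetical choice.
    return (semantic_alignment_loss(img_emb, class_emb, labels)
            + lam * instance_discrimination_loss(view1, view2))
```

In this reading, the instance-discrimination term supplies the task-independent, low-level visual signal that the abstract argues reduces bias toward seen categories, while the semantic-alignment term supplies the task-specific knowledge needed to transfer via class semantics.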



Similar Resources

A Unified approach for Conventional Zero-shot, Generalized Zero-shot and Few-shot Learning

Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot and few-shot learning problems. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our ...


Communicating Hierarchical Neural Controllers for Learning Zero-shot Task Generalization

The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions...


Generalized Zero-Shot Learning via Synthesized Examples

We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder based architecture, consisting of a probabilistic encoder and a probabilistic conditional decoder, our model can generate novel exemplars from seen/unseen classes, given their respective class attributes. These exemplars can s...
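The snippet above describes a conditional VAE whose encoder and decoder are conditioned on class attributes, so that exemplars can be synthesized for unseen classes from their attributes alone. Below is a minimal sketch of such an architecture; the layer sizes, attribute dimensionality, and method names are hypothetical and are not taken from that paper.

```python
# Minimal sketch of a conditional VAE for exemplar synthesis in GZSL.
# Dimensions and names are hypothetical, not the cited paper's architecture.
import torch
import torch.nn as nn


class ConditionalVAE(nn.Module):
    def __init__(self, feat_dim=2048, attr_dim=85, latent_dim=64, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim + attr_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + attr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim))

    def forward(self, x, attrs):
        # Encoder and decoder are both conditioned on the class attributes.
        h = self.encoder(torch.cat([x, attrs], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.decoder(torch.cat([z, attrs], dim=1))
        # Train with reconstruction loss on recon plus a KL term on (mu, logvar).
        return recon, mu, logvar

    @torch.no_grad()
    def synthesize(self, attrs, n_per_class=100):
        """Sample exemplar features for (possibly unseen) classes from their
        attribute vectors by decoding random latent codes."""
        attrs = attrs.repeat_interleave(n_per_class, dim=0)
        z = torch.randn(attrs.size(0), self.mu.out_features, device=attrs.device)
        return self.decoder(torch.cat([z, attrs], dim=1))
```

Features produced by a `synthesize`-style step are typically used to train an ordinary classifier over both seen and unseen classes, which is the usual role of such exemplars in generalized zero-shot learning.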


Using Task Features for Zero-Shot Knowledge Transfer in Lifelong Learning

Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly fr...


Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning

As a step towards developing zero-shot task generalization capabilities in reinforcement learning (RL), we introduce a new RL problem where the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks. In this problem, we consider two types of generalizations: to previously unseen instructions and to longer sequences of instructions. For generaliz...



Journal

Journal title: Proceedings of the AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i3.16375